22 research outputs found

    On the estimation of face recognition system performance using image variability information

    The type and amount of variation that exists among images in facial image datasets significantly affects Face Recognition System Performance (FRSP). This points towards the development of an appropriate image Variability Measure (VM) for face-type image datasets. Given such a VM, the relationship between the image variability characteristics of facial image datasets and expected FRSP values can be modelled. Thus, this paper presents a novel method to quantify the overall data variability that exists in a given face image dataset. The resulting Variability Measure (VM) is then used to model FR system performance versus VM (FRSP/VM). Note that VM takes into account both the inter- and intra-subject class correlation characteristics of an image dataset. Using eleven publicly available face image datasets and four well-known FR systems, computer-simulation-based experimental results showed that FRSP/VM-based prediction errors are confined to the region of 0 to 10%.
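
    The abstract does not give the exact formulation of the VM; a minimal illustrative sketch of a correlation-based variability score, assuming images are flattened to equal-length vectors and labelled by subject, might look like the following. The way the inter- and intra-class terms are combined here is hypothetical and not the paper's method.

```python
import numpy as np

def variability_measure(images, labels):
    """Hypothetical variability score for a face dataset.

    images: (N, H*W) array of flattened, equally sized face images
    labels: (N,) array of subject identifiers
    Returns a scalar; higher values indicate more image variability
    (lower intra- and inter-subject correlation).
    """
    X = images - images.mean(axis=1, keepdims=True)
    X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    C = X @ X.T                                  # pairwise normalised correlations
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)

    intra = C[same & off_diag].mean()            # within-subject similarity
    inter = C[~same].mean()                      # between-subject similarity
    # High similarity -> low variability; average the two effects (illustrative only).
    return 1.0 - 0.5 * (intra + inter)
```

    A score computed this way could then be regressed against measured FR system accuracy to obtain an FRSP/VM curve of the kind the paper describes.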

    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise successful new approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    Practical classification methods for indoor positioning

    Location awareness is of primary importance in a wealth of applications such as transportation, mobile health systems, augmented reality and navigation. For example, in busy transportation areas (such as airports), providing clear, personalised notifications and directions can reduce delays and improve passenger journeys. Currently some applications provide easy access to information. These travel-related applications can become context aware via the availability of accurate indoor/outdoor positioning. However, there are barriers that still have to be overcome. One such barrier is the time required to set up and calibrate indoor positioning systems; another is the challenge of scalability with regard to the processing requirements of indoor positioning algorithms. This paper investigates the relationship between the calibration data and positioning system accuracy, and analyses the performance of a k-Nearest Neighbour (k-NN) based positioning algorithm using real GSM data. Furthermore, the paper proposes a positioning scheme based on Gaussian Mixture Models (GMM). Experimental results show that the proposed GMM algorithm (without post-filtering) provides high levels of localization accuracy and successfully copes with the scalability problems that the conventional k-NN approach faces.
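
    As a rough illustration of the two families of methods being compared, the sketch below estimates position from received signal strength (RSS) fingerprints with a k-NN regressor and, alternatively, with one Gaussian mixture per calibration cell. The data layout (`rss_train`, `pos_train`, `cell_train`, `cell_centres`) and the scikit-learn calls are assumptions of this example, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.mixture import GaussianMixture

# rss_train: (N, M) GSM RSS fingerprints; pos_train: (N, 2) x/y coordinates;
# cell_train: (N,) calibration-cell labels; cell_centres: {label: (x, y)} -- all illustrative.

def knn_positioning(rss_train, pos_train, rss_query, k=5):
    """Classical fingerprinting: average the positions of the k closest fingerprints."""
    knn = KNeighborsRegressor(n_neighbors=k).fit(rss_train, pos_train)
    return knn.predict(rss_query)

def gmm_positioning(rss_train, cell_train, cell_centres, rss_query, n_components=3):
    """Fit one GMM per calibration cell; report the centre of the most likely cell."""
    models = {c: GaussianMixture(n_components, covariance_type="diag")
                 .fit(rss_train[cell_train == c])
              for c in np.unique(cell_train)}
    cells = list(models)
    scores = np.stack([models[c].score_samples(rss_query) for c in cells], axis=1)
    best = np.array(cells)[scores.argmax(axis=1)]
    return np.array([cell_centres[c] for c in best])
```

    A practical advantage of the GMM variant, consistent with the scalability point above, is that each query is scored against a fixed number of compact models rather than against every stored calibration fingerprint.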

    Advances in classification of EEG signals via evolving fuzzy classifiers and dependant multiple HMMs.

    Two novel approaches to the classification of brain signals (electroencephalogram, EEG) are introduced in the paper. The first method is based on a modular probabilistic network architecture that employs multiple dependent hidden Markov models (DM-HMM-D) on the input features (channels). The second method, eClass, is based on an on-line evolvable fuzzy rule base of EEG signal prototypes that represent each class and take into consideration the spatial proximity between input signals. Both approaches use supervised learning but differ in their mode of operation. eClass is designed recursively, on-line, and has an evolvable structure, while DM-HMM-D is trained off-line, in a block-based mode, and has a fixed architecture. Both methods have been extensively tested on real EEG data recorded during several experimental sessions involving a single female subject exposed to mild pain induced by a laser beam. Experimental results illustrate the viability of the proposed approaches and their potential for solving similar classification problems. (c) Elsevier.
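
    The eClass rule base itself is not specified in this abstract; the sketch below is a much-simplified, purely illustrative on-line prototype-based classifier in the same spirit, recursively adding or updating one prototype per class as samples arrive. The class name, the radius heuristic and the update rule are assumptions of this example.

```python
import numpy as np

class OnlinePrototypeClassifier:
    """Toy on-line classifier keeping per-class prototypes of EEG feature vectors.

    A new prototype is created when a sample is far from all existing prototypes
    of its class; otherwise the nearest prototype is updated recursively.
    This is only a simplified stand-in for an evolving fuzzy rule base.
    """

    def __init__(self, radius=1.0):
        self.radius = radius
        self.prototypes = {}                     # label -> list of [mean vector, count]

    def partial_fit(self, x, label):
        x = np.asarray(x, float)
        protos = self.prototypes.setdefault(label, [])
        if protos:
            d = [np.linalg.norm(x - p[0]) for p in protos]
            i = int(np.argmin(d))
            if d[i] <= self.radius:              # update the nearest prototype recursively
                mean, n = protos[i]
                protos[i] = [(mean * n + x) / (n + 1), n + 1]
                return
        protos.append([x, 1])                    # "evolve" the structure: add a prototype

    def predict(self, x):
        best, best_d = None, np.inf
        for label, protos in self.prototypes.items():
            for mean, _ in protos:
                d = np.linalg.norm(np.asarray(x, float) - mean)
                if d < best_d:
                    best, best_d = label, d
        return best
```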

    Use of Cubic Bézier Curves for Route Planning

    We consider the use of cubic Bézier curves for planning UAV routes. The proposed approach allows the user to trade off the length of the solution route against the level of risk/hazard exposure encountered. Exhaustive search is used to place control points on a 2D grid superimposed on the environment. High quality routes are generated using relatively coarse grids. Comparison is made with the graph-theoretic A* technique.
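
    A cubic Bézier route is fully determined by four control points, B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3. A minimal sketch of the exhaustive grid search described above could look like the following; the 100x100 workspace, the `risk` callable and the cost of length plus weighted risk exposure are assumptions of this example, not the paper's exact formulation.

```python
import itertools
import numpy as np

def bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve at n values of t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def plan_route(start, goal, risk, grid_step=10.0, risk_weight=5.0):
    """Exhaustively place the two interior control points on a coarse 2D grid.

    risk(point) -> hazard level at a 2D point (hypothetical user-supplied map).
    Cost = route length + risk_weight * mean risk along the route.
    """
    xs = np.arange(0.0, 100.0 + grid_step, grid_step)      # assumed 100x100 workspace
    candidates = np.array(list(itertools.product(xs, xs)))
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    best_curve, best_cost = None, np.inf
    for p1, p2 in itertools.product(candidates, repeat=2):
        curve = bezier(start, p1, p2, goal)
        length = np.sum(np.linalg.norm(np.diff(curve, axis=0), axis=1))
        exposure = np.mean([risk(pt) for pt in curve])
        cost = length + risk_weight * exposure
        if cost < best_cost:
            best_curve, best_cost = curve, cost
    return best_curve, best_cost
```

    Coarsening `grid_step` shrinks the search space quadratically, which is consistent with the abstract's observation that relatively coarse grids still yield high quality routes.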

    Gradient-based multi-resolution image fusion

    A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters, to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process analogous to that of the conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
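
    The full fuse-then-decompose architecture is not reproduced here; as a single-level, greatly simplified illustration of gradient-guided fusion (per pixel, keep the input with the larger local gradient activity), one might write something like the sketch below. The smoothing window and the hard per-pixel selection are assumptions of this example and not the paper's QMF-based pyramid scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_guided_fusion(img_a, img_b, window=5):
    """Very simplified gradient-guided fusion of two registered grayscale images.

    Per pixel, keep the input whose local gradient activity (smoothed gradient
    magnitude) is larger. This is only a single-level stand-in for the
    multiresolution fuse-then-decompose scheme described in the abstract.
    """
    def activity(img):
        gy, gx = np.gradient(img.astype(float))
        return uniform_filter(np.hypot(gx, gy), size=window)

    return np.where(activity(img_a) >= activity(img_b), img_a, img_b)
```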

    Sensor noise effects on signal-level image fusion performance.

    The aim of this paper is twofold: (i) to define appropriate metrics which measure the effects of input sensor noise on the performance of signal-level image fusion systems and (ii) to employ these metrics in a comparative study of the robustness of typical image fusion schemes whose inputs are corrupted with noise. Thus, system performance metrics for measuring both absolute and relative degradation in fused image quality are proposed when fusing noisy input modalities. A third metric, which considers fusion of noise patterns, is also developed and used to evaluate the perceptual effect of noise corrupting homogeneous image regions (i.e. areas with no salient features). These metrics are employed to compare the performance of different image fusion methodologies and feature selection/information fusion strategies operating under noisy input conditions. Altogether, the performance of seventeen fusion schemes is examined and their robustness to noise is considered at various input signal-to-noise ratio values for three types of sensor noise characteristics.
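
    The exact metrics are not given in the abstract; a hypothetical example of absolute and relative degradation measures, assuming some fusion-quality score Q(fused, inputs) such as a correlation- or edge-preservation-based measure, might be sketched as follows. Both the placeholder quality score and the degradation definitions are assumptions of this example.

```python
import numpy as np

def fusion_quality(fused, inputs):
    """Placeholder quality score: mean correlation between the fused image and
    each (noise-free) input. A stand-in for any proper fusion performance metric."""
    f = fused.ravel() - fused.mean()
    scores = []
    for img in inputs:
        x = img.ravel() - img.mean()
        scores.append(float(f @ x / (np.linalg.norm(f) * np.linalg.norm(x) + 1e-12)))
    return float(np.mean(scores))

def degradation(fuse, clean_inputs, noisy_inputs):
    """Absolute and relative drop in fusion quality caused by input sensor noise.

    fuse: callable mapping a list of input images to a fused image.
    Quality of both fused results is measured against the clean inputs.
    """
    q_clean = fusion_quality(fuse(clean_inputs), clean_inputs)
    q_noisy = fusion_quality(fuse(noisy_inputs), clean_inputs)
    absolute = q_clean - q_noisy
    relative = absolute / (q_clean + 1e-12)
    return absolute, relative
```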

    Lowering costs in anticoagulation therapy.

    The objectives of this work in progress are to improve the levels of care in anticoagulation therapy while reducing the effort required and the costs. This will be achieved by preprocessing the available real-world data and projecting it into a suitable analysis space before modelling with individualised, constantly learning Evolving Takagi-Sugeno [1] and connectionist network-type models whose structure and parameters are the result of extensive research. It is hoped that this will lead to accurate predictions of future levels of anticoagulation from a given dose recommendation. (c) Springer.
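
    As background for readers unfamiliar with the model family mentioned above, a minimal (non-evolving) Takagi-Sugeno inference step, with Gaussian rule memberships and linear consequents, can be sketched as follows; the parameter layout is purely illustrative and this shows only the prediction step, not the recursive structure learning of an Evolving Takagi-Sugeno model.

```python
import numpy as np

def takagi_sugeno_predict(x, centres, spreads, coeffs):
    """First-order Takagi-Sugeno inference for one input vector x.

    centres: (R, d) rule focal points; spreads: (R,) Gaussian widths;
    coeffs:  (R, d+1) linear consequent parameters [bias, w1..wd] per rule.
    Output is the firing-strength-weighted average of the local linear models.
    """
    x = np.asarray(x, float)
    d2 = np.sum((centres - x) ** 2, axis=1)
    firing = np.exp(-d2 / (2.0 * spreads ** 2))      # rule activation levels
    firing = firing / (firing.sum() + 1e-12)
    rule_out = coeffs[:, 0] + coeffs[:, 1:] @ x      # per-rule linear predictions
    return float(firing @ rule_out)
```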

    Using macroscopic information in image segmentation

    Post-processing “macroscopically” the output segmented images obtained from conventional image segmentation (IS) techniques leads to the concept of Micro-Macro Image Segmentation (MMIS). MMIS pays extra attention to information extracted from relatively large image regions and, as a result, overall system segmentation performance improves both subjectively and objectively. The proposed post-processing scheme is generic, in the sense that it can be used together with any existing segmentation approach. Thus, given an input segmented image, MMIS has the ability to automatically select an appropriate number of regions and classes in a way that helps object-oriented visual information to become more apparent in the final segmented output image. Computer simulation results clearly indicate that significant IS performance benefits can be obtained by augmenting conventional IS schemes within an MMIS framework, with or without input images being corrupted by additive Gaussian noise.
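
    The MMIS post-processing itself is not detailed in the abstract; as a rough illustration of favouring “macroscopic” region information, the sketch below relabels every connected segment smaller than a size threshold to the existing class whose mean intensity is closest. The size threshold and the intensity-based merging rule are assumptions of this example, not the MMIS algorithm.

```python
import numpy as np
from scipy import ndimage

def macro_postprocess(image, label_map, min_region=200):
    """Toy macro-level clean-up of a segmented grayscale image.

    Connected regions smaller than min_region pixels are reassigned to the
    class whose mean intensity is closest to the region's own mean, so that
    only relatively large image structures survive in the output labelling.
    """
    out = label_map.copy()
    class_means = {c: image[label_map == c].mean() for c in np.unique(label_map)}
    for c in np.unique(label_map):
        regions, n = ndimage.label(label_map == c)
        for r in range(1, n + 1):
            mask = regions == r
            if mask.sum() < min_region:
                region_mean = image[mask].mean()
                best = min(class_means, key=lambda k: abs(class_means[k] - region_mean))
                out[mask] = best
    return out
```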